IMPROCES(C). Copyright John Wagner 1991. All rights reserved.
An Image Processing and VGA Primer
This article may only be distributed as part of the IMPROCES, Image
Processing Software package by John Wagner. IMPROCES is used for the
examples and it is assumed the reader has a copy of the program. This
article may not be reproduced in any manner without prior permission
from the author.
The Image and The Screen:
Images are represented as a series of points of varying intensity on a
surface. For example, on a monochrome or black and white photograph, the
points on the image are represented with varying shades of grey. On a
computer screen, these points are called pixels. Pixels on the screen
are mapped into a two dimensional coordinate system that starts at the
top-left corner with the coordinate 0,0. Coordinates in the X direction
refer to pixels going to the right (horizontally) and coordinates in the
Y direction refer to pixels going down (vertically). The coordinates X,Y
define a specific pixel on the screen. When a computer image is said to
have a resolution of 320x200x256, it means that the image has an X width
of 320 pixels, a Y length of 200 pixels and contains 256 colors.
Pixel Mapping on 320x200 video screen:
  0,0                 319,0
     X-------->
   Y
   |
   |
   V
  0,199               319,199
Color:
A shade of grey is defined as having equal levels of Red, Green and Blue
(RGB). A color image, however, is made up of points with varying levels
of RGB. NOTE: The RGB color model is just one way of representing color.
Conveniently, it is also the model that computers use when representing
colors on a screen. For that reason, it is the model we will use here!
When splitting up a color into its RGB components, it is common to use a
number between 0 and 1 (or a percentage of total color) to represent the
color's RGB intensities. R=0, G=0, B=0 (usually shown as 0,0,0) would be
the absence of all color (black) and 1, 1, 1 would be full intensity for
all colors, or white. .5, 0, 0 would be half red, while 0, .25, 0 would
be one quarter green. Using this model, an infinite number of colors is
possible. Unfortunately, personal computers of this day and age are not
capable of handling an infinite number of colors and must use
approximations when dealing with color.
The VGA:
With the advent of the VGA video subsystem for IBM PCs and compatibles,
image processing is now available to the home PC graphics enthusiast.
Also, since this article is meant to be used with IMPROCES, it will help
to understand the limitations of the hardware we are working with. The
VGA can display 256 colors at one time out of a possible 262,144. The
number 262,144 is not a magical number. It comes from the limitations of
the VGA hardware itself. The VGA (like many common graphics subsystems)
represents its 256 colors in a look up table of RGB values, so that the
memory where the video display is mapped need only keep track of one
number (the look up value) instead of three (see Diagram 1). The look up
table of the VGA allows for 64 (0 to 63) levels of RGB for each color.
This means that 64 to the 3rd power (64^3 = 262,144) different colors
are possible. Because only 64 levels of RGB are possible, it is only
possible to represent 64 shades of grey with a VGA. Remember that a
shade of grey is defined as equal levels of RGB. There is also a
disparity with the common usage of values from 0 to 1 to define a level
of RGB. However, this problem is easily solved by dividing the VGA look
up table RGB number by 64 to get its proper percentage of the total
color (e.g. 32/64 = .5, or 50% of total color).
Diagram 1. Sample Color Look Up Table (LUT) for VGA
Color    R    G    B
    1    0   23   56
    2   34   24   45
    3   23   12   43
 ....
  254   13   32   43
  255   12   63   12
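As a rough illustration (this is a sketch with made-up names, not code from
IMPROCES), the look up table and the divide-by-64 conversion described above
could be written in C like this:

    /* A 256 entry LUT: one R, G and B level (0 to 63) for each color. */
    struct rgb { unsigned char r, g, b; };
    struct rgb lut[256];

    /* Convert a VGA level (0 to 63) into the 0-to-1 form used earlier by
       dividing by 64, as described above (32/64 = .5, or 50%). */
    double level_to_fraction(unsigned char level)
    {
        return level / 64.0;
    }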
Another important aspect of the VGA is resolution, not just the 256
colors it can display at one time. There are now Super VGA cards that
allow you to work in display resolutions of up to 1024x768 with 256
colors. These Super VGA cards have more memory on board, so they can
handle the higher resolutions. It is important to understand that the
amount of memory on the video board determines the highest resolution it
can handle.
Video memory is bitmapped: in a 256 color mode, one pixel requires one
byte of memory, since a byte can hold a value from 0 to 255. VGA video
memory begins at the hexadecimal address A000:0000 and the VGA video
memory area is 64K in length. Because of the length of the VGA memory
area, the maximum resolution a standard VGA card can achieve is
320x200x256. This is because 320 pixels (bytes) times 200 pixels (bytes)
equals 64,000 bytes (62.5K), which just fits inside the 64K video memory
area.
    pixel  0,0──────┐      1,0        2,0
                    │       │          │
    Memory Address  │       │          │
         A000:0000──┘       │          │
         A000:0001──────────┘          │
         A000:0002─────────────────────┘
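To make the mapping concrete, here is a small C sketch (not taken from the
IMPROCES source) that models video memory as an ordinary byte array; on real
hardware the array would live at segment A000.

    /* The offset of pixel (x,y) in a 320x200x256 mode: one byte per pixel. */
    #include <stdio.h>

    static unsigned char screen[320 * 200];    /* stands in for VGA memory */

    void put_pixel(int x, int y, unsigned char lut_index)
    {
        screen[y * 320 + x] = lut_index;       /* row y starts at byte y*320 */
    }

    int main(void)
    {
        put_pixel(2, 0, 15);                   /* third byte of the buffer */
        printf("%d\n", (int)screen[2]);        /* prints 15 */
        return 0;
    }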
But wait a second, how can we get video modes up to 1024x768? If you
want a resolution of 1024x768 at 1 byte per pixel (256 colors), you
require 1024x768 bytes of video memory: 786,432 bytes, or 768K. Because
the architecture of the PC only allows for a maximum of 64K of VGA
memory address space, a Super VGA card maintains its own pool of memory
that is displayed to the screen and swaps memory in and out of the 64K
"proper" VGA video memory address space so that programs can write to it.
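The arithmetic involved looks roughly like this (a sketch only; the actual
bank-switching call differs from card to card and is not shown):

    /* Which 64K bank, and which offset inside it, holds pixel (x,y)
       in a 1024x768x256 mode. */
    #include <stdio.h>

    int main(void)
    {
        long x = 500, y = 400;            /* an arbitrary pixel */
        long linear = y * 1024L + x;      /* byte position in the full image */
        long bank   = linear / 65536L;    /* which 64K window to map in */
        long offset = linear % 65536L;    /* position inside that window */
        printf("bank %ld, offset %ld\n", bank, offset);
        return 0;
    }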
IMPROCES Stuff:
IMPROCES image processing functions all work in a defined area called the
WORK AREA. The default work area starts at the top-left corner of the
screen and ends at pixel 196 in the X (horizontal) direction and at
pixel 165 in the Y (vertical) direction. These aren't magical numbers
either. A friend of mine lent me a CCD device to capture greyscale
images and process them with IMPROCES. The size of the image the CCD
device output was, you guessed it, 196x165. You can change the WORK AREA
by selecting WORK AREA from the ENHANCE pull down menu and then defining
a new rectangular area for IMPROCES to use.
The Histogram:
With all of that out of the way, let's examine a few things we can learn
from an image without actually modifying it. A tool that is commonly
used to determine the overall contrast of an image is the Histogram. A
histogram is defined as a measure of the distribution of a defined
set. As you probably know, histograms are not unique to image
processing.
The histogram takes a count of all the values on the image and
displays them graphically. When I say a count, I mean how many pixels
contain each of the colors in LUT entries 0 through 255. The count for a
color is called that color's BIN.
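Counting the BINs is simple. Here is a minimal C sketch (assumed, not the
IMPROCES source) for an 8 bit image stored as one byte per pixel:

    /* Count how many pixels use each LUT entry (0 through 255). */
    void histogram(const unsigned char *image, long npixels, long bins[256])
    {
        long i;
        for (i = 0; i < 256; i++)
            bins[i] = 0;
        for (i = 0; i < npixels; i++)
            bins[image[i]]++;             /* each pixel falls into one BIN */
    }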
To get a histogram in IMPROCES, select AREA HISTO from the ENHANCE pull
down menu. The histogram is displayed from left to right, starting at
color 0 and working over a column at a time to color 255. The BINs are
displayed as lines going upward. You can move the mouse to a desired BIN
and click on that column to get that column's exact count, which is
shown in the lower right corner. You can also press 'S' to save the
histogram to an ASCII file that you can examine later.
Assuming that the image is a greyscale image, the histogram shows us
the overall contrast of the image by how much of the greyscale it covers.
A high contrast image will cover most of the greyscale, while a low
contrast image will only cover a small portion of the greyscale.
Examples of Histograms:
High Contrast Image:
             -100 x6
    │││      -
   │ ││││    -
 ││││││││    -
 │││││││││   -
│││││││││││  -
└┴┴┴┴┴┴┴┴┴┘  -0
0         255
Low Contrast Image:
             -100 x8
    │││      -
    ││││     -
    ││││     -
    ││││     -
    ││││     -
────┴┴┴┴───  -0
0         255
Contrast Enhancement:
Contrast enhancement is one of the easiest image processing functions to
understand. As of version 3.0 of IMPROCES, contrast stretching will only
work properly on images with a greyscale palette. Future versions of
IMPROCES will probably allow for contrast stretching of color images.
Contrast stretching takes a portion of the greyscale and stretches it so
that it covers a wider portion of the greyscale. To do this, you must
first define the area of the greyscale that you would like to stretch.
IMPROCES provides three ways to do this. All of the methods use two
variables, one called the Low_CLIP (L_CLIP) and one called the High_CLIP
(H_CLIP). Depending on which method you use, the variables are used in
different ways.
When using the CNTR STRTCH method, the first BIN working up from 0 that
contains more pixels than the value of L_CLIP will become the color 0
(black), and any BINS below that value are set to 0 as well. The first
BIN working down from 255 that contains more pixels than the value of
H_CLIP will become the value 255 (white), and any BINS above that will
become 255 as well. All of the BINS in between will be remapped between
0 and 255 by a ratio of where they were in respect to the original LOW
and HIGH CLIP values.
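Before looking at a worked example, here is a sketch (assumed, not the
IMPROCES source) of how the two clip points might be found from the histogram
BINs just described:

    /* Find the first BIN up from 0 whose count exceeds l_clip, and the
       first BIN down from 255 whose count exceeds h_clip. */
    void find_clips(const long bins[256], long l_clip, long h_clip,
                    int *low, int *high)
    {
        *low = 0;
        while (*low < 255 && bins[*low] <= l_clip)
            (*low)++;
        *high = 255;
        while (*high > 0 && bins[*high] <= h_clip)
            (*high)--;
    }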
Take the original low contrast image:
Low Contrast Image:
             -100 x8
    │││      -
    ││││     -
    ││││     -
    ││││     -
    ││││     -
────┴┴┴┴───  -0
0         255
L_CLIP and H_CLIP are both set to 30 so the L_CLIP and H_CLIP will
hit at these points:
             -100 x8
    │││      -
    ││││     -
    ││││     -
    ││││     -
    ││││     -
────┴┴┴┴───  -0
0   |  |  255
  L_CLIP H_CLIP
These BINS will now be reset to 0 and 255, and the BINS in between are
set in respect to their original location relative to the L_CLIP and
H_CLIP BINS:
Contrast Stretched Image:
             -100 x8
│  │   │     -
│  │   │  │  -
│  │   │  │  -
│  │   │  │  -
│  │   │  │  -
└──┴───┴──┘  -0
0         255
L_CLIP   H_CLIP
The resulting image will have its contrast stretched across the entire
greyscale, resulting in a higher contrast image.
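The remapping of the in-between greyscale values can be sketched like this
(again an assumed implementation, not the IMPROCES source):

    /* Stretch grey levels so that low maps to 0 and high maps to 255. */
    unsigned char stretch_level(int value, int low, int high)
    {
        long mapped;
        if (value <= low)  return 0;
        if (value >= high) return 255;
        mapped = (long)(value - low) * 255 / (high - low);
        return (unsigned char)mapped;
    }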
The standard CNTR STRTCH works well for a lot of images. The problem is,
rarely will an image be spread so evenly across the greyscale to begin
with. A lot of times there will be spikes in the histogram at either
end that you might want to remove.
Histogram with spikes:
             -100 x6
 │   │   │   -
 ││ │││  │   -
 ││ ││││ │   -
 │││││││││   -
 │││││││││   -
─┴┴┴┴┴┴┴┴┴─  -0
0         255
Using the standard contrast stretch, you would not be able to get over
these spikes if you wanted to stretch the middle of the histogram.
IMPROCES provides the CNTR VSTCH for this purpose. This method uses the
variables L_CLIP and H_CLIP to pick which BINs become the low and high
clip points directly, without regard to the BIN counts. The only thing
to remember is not to set the L_CLIP higher than the H_CLIP, otherwise
the program will give you an error message. Besides the difference in
the way the program uses the variables, VSTCH works identically to STRTCH.
CNTR LSTRCH works the same as VSTCH in the respect that the variables
are used to pick the L_CLIP and H_CLIP values. The difference is that
the BINS below the L_CLIP are not set to 0, they are left alone, and
likewise the BINS above the H_CLIP value are left alone rather than set
to 255. Only the BINS in between L_CLIP and H_CLIP are stretched between
0 and 255.
Convolution:
Convolution is another common method of processing an image. It computes
a pixel's new value as a function of its neighbors. The main points of
convolution can be explained rather easily:
A matrix containing certain values is decided upon. This matrix is
called a Kernel. The kernel is then passed over the image from left
to right, top to bottom, with the value of the center pixel being
replaced with the sum of the products of the kernel values and the
pixels under them.
A kernel with an odd number for its width and length is used. IMPROCES
uses a 3x3 kernel:
┌───┬───┬───┐
│   │   │   │
├───┼───┼───┤
│   │   │   │
├───┼───┼───┤
│   │   │   │
└───┴───┴───┘
An odd length and width is used so that there will be a center point on
the kernel. Values are then assigned to the kernel:
Sharpening kernel:
-1 -1 -1
-1 9 -1
-1 -1 -1
The kernel is then passed over the image from left to right, top to
bottom. The kernel is multiplied, point by point, with the pixels in
the 3x3 section of the image under it. The products are then summed
and the middle pixel in the image under the kernel is replaced with
this new value.
Sharpening kernel:
-1 -1 -1
-1 9 -1
-1 -1 -1
Area of image being processed:
23 34 25
23 43 21
23 43 43
Product of values:
-23 -34 -25
-23 387 -21
-23 -43 -43
Sum of products:
152
Convolved Area of image:
23 34 25
23 152 21
23 43 43
The new value of the center pixel is then written to the screen. You
should note that the output from the previous operation is not used as
input for the next operation. The input and output images must be
treated separately. IMPROCES does this operation in place by using two
rotating three-line buffers. The resulting image is said to be convolved
from the original.
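Here is a minimal C sketch of the whole operation (assumed, not the IMPROCES
source): it reads from a separate input buffer so that earlier output never
feeds later input, leaves the edge pixels alone, and clamps the sum to the
0 to 255 range (how out-of-range sums are handled is an assumption on my part):

    /* Convolve an 8 bit greyscale image with a 3x3 kernel. */
    void convolve3x3(const unsigned char *in, unsigned char *out,
                     int width, int height, const int kernel[3][3])
    {
        int x, y, i, j;
        for (y = 1; y < height - 1; y++) {
            for (x = 1; x < width - 1; x++) {
                long sum = 0;
                for (j = -1; j <= 1; j++)
                    for (i = -1; i <= 1; i++)
                        sum += (long)kernel[j + 1][i + 1] *
                               in[(y + j) * width + (x + i)];
                if (sum < 0)   sum = 0;     /* clamp to the pixel range */
                if (sum > 255) sum = 255;
                out[y * width + x] = (unsigned char)sum;
            }
        }
    }

Any of the 3x3 kernels in this article (sharpening, Laplacian, horizontal,
vertical) could be passed in as the kernel argument.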
The example shows a sharpening kernel. A sharpening kernel is actually a
Laplacian kernel with the original image added back in. The Laplacian is
an edge detecting kernel, so when you detect the edges of the image and
then add the original image back in, you sharpen the edges, thereby
improving the sharpness of the image. A Laplacian kernel looks like so:
Laplacian kernel:
-1 -1 -1
-1 8 -1
-1 -1 -1
You will notice that the center value for the Laplacian is an 8 and the
sum of the kernel is 0. What this does to an image is make areas that
have no real features and are close to being a continuous tone disappear,
and leave features that have a lot of contrast with their neighbors. This
is the function of an edge detector. If a kernel has a sum of 0, it will
enhance the edges of an image in a certain direction. In the Laplacian's
case, the enhancement takes place in all directions. Here is why this
happens:
If you have a 3x3 area of the image in which all the pixels are equal
to the same value, say all of the pixels are equal to 200, it will be
uniform in color and contain no edges. When the Laplacian is passed
over this section, the result of the convolution will be 0, or the
absence of an edge. If you have an area that looks like:
100 100 100
200 200 200
250 250 250
The result of the convolution with the Laplacian would be 150, and there
would be an edge present.
Take for example a run of pixels that looks like this: 0 150 200. That
area obviously contains an edge. On a graph it would look like this:
         /| 255
        / | 200
       /  | 150
    0-----|
If you used a 1x3 kernel with the values -1 0 1, the middle value would
become 200 ( (0 x -1) + (150 x 0) + (200 x 1) = 200 ) and show the
presence of the edge. If the values of the run were 100 100 100, the
area would be uniform and the middle pixel's value would become 0,
accurately depicting the uniform area.
A horizontal kernel looks like this:
Horizontal kernel:
-1 -1 -1
0 0 0
1 1 1
You will notice that the horizontal kernel will only detect edges in the
horizontal direction, due to the direction of the kernel.
A vertical kernel looks like this:
Vertical kernel:
-1 0 1
-1 0 1
-1 0 1
The same goes for the vertical kernel, only it works in the vertical
direction.
There are other methods of edge detection available that IMPROCES does
not implement at the present time. In fact, there are new methods and
filters being invented every day. Future versions of IMPROCES will
support convolving an area with a separate filter for each direction and
other forms of edge detection.
There is also a variable called BOOST that is used with the convolution
filters. BOOST is used to increase or decrease the amount of the filter
that is applied to the original image. What happens is that the pixels
under the kernel are first multiplied by the corresponding kernel value
and then multiplied by the BOOST value. A BOOST value of less than 1.0
will lessen the effect the filter has, while a value greater than 1.0
will increase the effect.
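In other words, the inner sum from the convolution sketch earlier would pick
up a BOOST factor, something like this (assumed, not the IMPROCES source):

    /* The 3x3 sum with BOOST applied to each product. */
    double convolve_boosted(const int kernel[3][3],
                            const unsigned char pixels[3][3], double boost)
    {
        double sum = 0.0;
        int i, j;
        for (j = 0; j < 3; j++)
            for (i = 0; i < 3; i++)
                sum += kernel[j][i] * pixels[j][i] * boost;
        return sum;   /* boost < 1.0 weakens the filter, > 1.0 strengthens it */
    }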
IMPROCES includes a Custom Filter function that lets you define the
kernel to use. You will also note that as of version 3.0 of IMPROCES,
there are separate functions for greyscale images and for color images.
The greyscale functions work a lot faster. The color functions must
convolve the RGB attributes of each pixel and then search the palette
for the proper color to replace the pixel with. The color process will
rarely find an exact match for the color produced by the convolution,
but it will find the closest possible match and use it. The grey
functions need only convolve the LUT value of the pixels and use the
result to get an exact match.
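That palette search amounts to finding the LUT entry with the smallest RGB
distance. A sketch (assumed, not necessarily how IMPROCES does it):

    /* Find the LUT entry closest to an RGB triple by squared RGB distance. */
    #include <limits.h>

    int nearest_lut_entry(const unsigned char lut[256][3], int r, int g, int b)
    {
        long best = LONG_MAX;
        int best_index = 0, i;
        for (i = 0; i < 256; i++) {
            long dr = r - lut[i][0], dg = g - lut[i][1], db = b - lut[i][2];
            long dist = dr * dr + dg * dg + db * db;
            if (dist < best) { best = dist; best_index = i; }
        }
        return best_index;
    }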
Average and Median:
Two other filters that are included are the Average and Median filters.
Both of these use a 3x3 matrix as before, but only the values in the
image are used.
Average will find the average value of the pixels under the matrix and
replace the center pixel with that value. Median will find the middle
value of the nine pixels and use that. Both functions work the same for
greyscale and color images.
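Finding the median of the nine pixels under the window can be done by sorting
a copy of them, for example (a sketch, not the IMPROCES source):

    /* Return the median of the nine pixels under a 3x3 window. */
    #include <stdlib.h>
    #include <string.h>

    static int compare_bytes(const void *a, const void *b)
    {
        return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
    }

    unsigned char median3x3(const unsigned char window[9])
    {
        unsigned char sorted[9];
        memcpy(sorted, window, sizeof sorted);
        qsort(sorted, 9, 1, compare_bytes);
        return sorted[4];    /* the fifth of nine sorted values */
    }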
An important note for all of the functions that use a matrix or a kernel
is that the pixels on the edges (top, right, left, bottom) of the work
area will not be affected by the functions. This is because these pixels
do not have enough neighbors to be used in processing. In IMPROCES these
pixels are simply left alone.
Wrap it up!:
All in all, image processing is used in many fields: astronomy,
dentistry, forensic science and X-ray imaging, to name just a few.
Software priced at $25, like IMPROCES, combined with the price of
personal computers equipped with VGA and SVGA hardware dropping like a
rock, has made image processing available to the masses. It is exciting,
useful and most of all fun.
I hope this little primer has piqued your interest in image processing
and that it helps make some of the things IMPROCES can do a little clearer.
If for some reason you obtained this article without a copy of IMPROCES,
the latest version of the program can be downloaded from the Dust Devil
BBS in Las Vegas, Nevada, (702)796-7134. I can also be reached at the
Dust Devil BBS if you have any questions about IMPROCES.